363 research outputs found

    Object completion effects in attention and memory

    Detecting live Salmonella cells in produce by coupling propidium monoazide with loop-mediated isothermal amplification (PMA-LAMP)

    Salmonella is a leading cause of foodborne illness worldwide, and in recent years an increasing number of Salmonella-related outbreaks in produce have been reported. It is therefore important that the produce industry be equipped with rapid, sensitive, and specific methods for detecting live Salmonella cells in produce, to better ensure produce safety. In this study, we first designed and optimized a loop-mediated isothermal amplification (LAMP) assay for Salmonella detection targeting the invasion gene (invA). We then incorporated a chemical reagent, propidium monoazide (PMA), into the sample-preparation step to prevent LAMP amplification from dead Salmonella cells. To our knowledge, this is the first study to combine these two technologies for live bacterial detection. The PMA-LAMP assay was evaluated for exclusion of false positives, sensitivity, and quantitative capability, and was then applied to detect live Salmonella cells in the presence of dead cells in several produce items (cantaloupe, spinach, and tomato). In pure culture, the invA-based PMA-LAMP avoided detecting heat-killed Salmonella cells at up to 7.5 × 10⁵ CFU per reaction and detected as few as 3.4-34 live Salmonella cells per reaction in the presence of 7.5 × 10³ heat-killed cells, with good quantitative capability (r² = 0.983). In produce testing, the assay avoided detecting heat-killed cells at up to 3.75 × 10⁸ CFU/g and successfully detected down to 5.5 × 10³ - 5.5 × 10⁴ CFU/g of live Salmonella cells in the presence of 3.75 × 10⁶ CFU/g of heat-killed cells, again with good quantitative capability (r² = 0.949-0.993). The total assay time was 3 hours. Compared with PMA-PCR, the PMA-LAMP assay was 10- to 100-fold more sensitive, 2 hours shorter, and technically simpler. In conclusion, the invA-based PMA-LAMP assay developed in this study is an effective tool for specific, sensitive, and quantitative detection of live Salmonella cells in produce.
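The quantitative capability cited above (r² of a LAMP standard curve) comes from fitting log-transformed cell counts against the assay readout. A minimal sketch of that calculation, assuming an ordinary least-squares fit of log₁₀(CFU per reaction) against time-to-positive; the data points in the example are invented for illustration and are not from the study:

```python
def fit_standard_curve(log_cfu, time_to_positive):
    """Ordinary least-squares fit; returns (slope, intercept, r_squared)."""
    n = len(log_cfu)
    mean_x = sum(log_cfu) / n
    mean_y = sum(time_to_positive) / n
    sxx = sum((x - mean_x) ** 2 for x in log_cfu)
    syy = sum((y - mean_y) ** 2 for y in time_to_positive)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(log_cfu, time_to_positive))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    r_squared = sxy ** 2 / (sxx * syy)  # coefficient of determination
    return slope, intercept, r_squared

# Invented example: 10^1..10^4 CFU per reaction vs. time-to-positive (minutes).
slope, intercept, r2 = fit_standard_curve([1, 2, 3, 4], [20.0, 17.0, 14.0, 11.0])
```

An r² near 1 indicates that the readout scales linearly with log cell count, which is what allows the assay to be used quantitatively.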

    Understanding Emojis for Financial Sentiment Analysis

    Social media content has been widely used for financial forecasting and sentiment analysis. However, emojis, a new “lingua franca” of social media, are often omitted during standard data pre-processing; we thus speculate that they may carry additional useful information. In this research, we study the effect of emojis in facilitating financial sentiment analysis and explore the most effective way to handle them during model training. Experiments are conducted on two datasets from the stock and crypto markets. Various machine learning models, deep learning models, and a state-of-the-art GPT-based model are used, and we compare their performance across different emoji encodings. Results show a consistent increase in model performance when emojis are converted to their descriptive phrases, and significant further gains after refining the descriptive terms of the most important emojis before fitting them into the models. Our research shows that emojis are a valuable source of information for understanding financial social media texts and should not be omitted.
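The encoding step described above, replacing emojis with descriptive phrases before tokenization, can be sketched as follows. The mapping and the refined finance-oriented phrases here are illustrative assumptions, not the paper's actual lexicon (in practice a library such as `emoji.demojize` supplies the base descriptions):

```python
# Hypothetical base descriptions (a real pipeline might use emoji.demojize).
EMOJI_DESCRIPTIONS = {
    "🚀": "rocket",
    "📈": "chart increasing",
    "📉": "chart decreasing",
    "💎": "gem stone",
}

# Hypothetical refinements for sentiment-salient emojis, replacing the
# literal description with a finance-oriented phrase.
REFINED = {
    "🚀": "price surging",
    "💎": "holding strongly",
}

def encode_emojis(text, refine=True):
    """Replace each known emoji with its descriptive (or refined) phrase."""
    for emo, desc in EMOJI_DESCRIPTIONS.items():
        phrase = REFINED.get(emo, desc) if refine else desc
        text = text.replace(emo, f" {phrase} ")
    return " ".join(text.split())  # normalize whitespace
```

The point of the refinement step is that a literal gloss like "rocket" is sentiment-neutral to a text model, whereas a phrase like "price surging" carries the bullish meaning the emoji has in financial posts.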

    Multisensory visuo-tactile context learning enhances the guidance of unisensory visual search

    Does multisensory distractor-target context learning enhance visual search over and above unisensory learning? To address this, we had participants perform a visual search task under both uni- and multisensory conditions. Search arrays consisted of one Gabor target that differed from three homogeneous distractors in orientation; participants had to discriminate the target's orientation. In the multisensory session, additional tactile (vibration-pattern) stimulation was delivered to two fingers of each hand, with the odd-one-out tactile target and the distractors co-located with the corresponding visual items in half the trials; the other half presented the visual array only. In both sessions, the visual target was embedded within identical (repeated) spatial arrangements of distractors in half of the trials. The results revealed faster response times to targets in repeated versus non-repeated arrays, evidencing 'contextual cueing'. This effect was enhanced in the multisensory session, importantly, even when the visual arrays were presented without concurrent tactile stimulation. Drift-diffusion modeling confirmed that contextual cueing increased the rate at which task-relevant information was accumulated and decreased the amount of evidence required for a response decision. Importantly, multisensory learning selectively enhanced the evidence-accumulation rate, expediting target detection even when the context memories were triggered by visual stimuli alone.
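The drift-diffusion account above can be made concrete with a toy single-boundary accumulator: contextual cueing corresponds to a higher drift rate (faster evidence accumulation) and/or a lower decision boundary, both of which shorten simulated response times. All parameter values below are illustrative assumptions, not the study's fitted estimates:

```python
import random

def simulate_rt(drift, boundary, rng, noise=1.0, dt=0.001, non_decision=0.3):
    """One simulated response time from a single-boundary diffusion process."""
    evidence, t = 0.0, 0.0
    while evidence < boundary:
        # Evidence grows at rate `drift`, perturbed by Gaussian noise.
        evidence += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return t + non_decision  # add non-decision time (encoding + motor)

def mean_rt(drift, boundary, n=200, seed=1):
    """Average RT over n simulated trials."""
    rng = random.Random(seed)
    return sum(simulate_rt(drift, boundary, rng) for _ in range(n)) / n
```

Comparing, say, `mean_rt(2.0, 1.0)` (high drift: repeated context) with `mean_rt(0.5, 1.0)` (low drift: novel context) reproduces the qualitative pattern reported: faster accumulation or a lower boundary yields faster mean RTs.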

    Cross-modal contextual memory guides selective attention in visual-search tasks

    Visual search is speeded when a target item is positioned consistently within an invariant (repeatedly encountered) configuration of distractor items (contextual cueing). Contextual cueing is also observed in cross-modal search, when the location of the (visual) target is predicted by distractors from another (tactile) sensory modality. Previous studies examining lateralized waveforms of the event-related potential (ERP) with millisecond precision have shown that learned visual contexts improve a whole cascade of search-processing stages. Drawing on ERPs, the present study tested alternative accounts of contextual cueing in tasks in which distractor-target contextual associations are established across, as compared to within, sensory modalities. To this end, we devised a novel cross-modal search task: search for a visual feature singleton, with repeated (and non-repeated) distractor configurations presented either within the same (visual) or a different (tactile) modality. We found reaction times (RTs) to be faster for repeated versus non-repeated configurations, with comparable facilitation effects between visual (unimodal) and tactile (cross-modal) context cues. Further, for repeated configurations, there were enhanced amplitudes (and reduced latencies) of ERP components indexing attentional allocation (PCN) and post-selective analysis of the target (CDA), respectively; both components correlated positively with the RT facilitation. These effects were again comparable between uni- and cross-modal cueing conditions. In contrast, motor-related processes indexed by the response-locked LRP contributed little to the RT effects. These results indicate that both uni- and cross-modal context cues benefit the same visual processing stages, namely the selection and subsequent analysis of the search target.